Closed Captioning: Getting Your Lines Right
As lecture capture and distance learning take hold in higher ed, colleges pursue different approaches to the issue of closed captioning and transcription.
- By Bridget McCrea
- 08/01/11
The rapid growth of lecture capture and distance education in higher education is raising fresh concerns about accessibility, since it's difficult--if not impossible--for hearing-impaired students to use these tools effectively. As a result, many colleges and universities are renewing their focus on closed captioning as a viable solution.
While the impetus for closed captioning stems from a desire to accommodate students with hearing issues, schools are also discovering that closed captioning has broader appeal, particularly among students for whom English is a second language. And for the rest of the students on campus, there's one other big benefit: It lets them search captured content quickly by keyword.
Closing in on Automation
Can the captioning process itself be automated? George Mason University (VA) believes the answer is yes. The school, whose population includes a growing number of students with disabilities--including veterans who are deaf or hard of hearing--already uses remote Communication Access Realtime Translation (CART) services in the classroom. This technology allows a deaf or hearing-impaired person to log in online with a username and password and view a real-time text translation of what's being said in class. (The system requires a trained operator at a remote location to provide manual transcription.)
Now, the university is looking to replace that system with a more automated solution. It recently approved a "caption proposal" that will allow faculty to upload their files and have them captioned quickly.
"We’re going to be a multimedia capturing service right here in our office," says Kara Zirkle, IT accessibility coordinator and head of the school's Assistive Technology Initiative. Her team will use the Docsoft:AV audio/video search and closed captioning system. In use at the school for a few years, the server-based technology includes an integrated voice-recognition system.
Although George Mason's closed-captioning procedures are still under development, Zirkle envisions a time when faculty members will upload their captured lectures to a server. Then, using voice-recognition capabilities, the software will create a time-stamped transcription. "For a one-hour video, the system can produce a transcript within 45 minutes," asserts Zirkle.
Just how accurate the closed captioning will be remains to be seen. "It will really depend on the audio quality, the speaker’s accent, and other variables," says Zirkle. As a result, George Mason isn't quite ready to trust the entire captioning process to automation. Once transcribed, the file will be opened in Docsoft Transcript Editor, where student workers will edit the transcript and clean up the text to match the video. "This allows us to have a clean, time-stamped transcript that can be played in Windows Media Player or other preferred players," explains Zirkle.
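The article doesn't specify Docsoft's file formats, but the end product Zirkle describes, a clean, time-stamped transcript a media player can display, can be sketched in general terms. The hypothetical Python snippet below (segment data and file names are invented for illustration) writes phrase-level recognition timestamps out as a standard SRT caption file; Docsoft's actual output, and the SAMI format Windows Media Player typically uses for captions, may differ.

```python
# Minimal sketch, not Docsoft's actual pipeline: write time-stamped recognition
# output as a standard SRT caption file. The segment data below is invented.

# (start_seconds, end_seconds, text) for each recognized phrase
segments = [
    (0.0, 3.2, "Welcome back to week three of the course."),
    (3.2, 7.8, "Today we'll finish our discussion of supply and demand."),
]

def srt_timestamp(seconds: float) -> str:
    """Format seconds as HH:MM:SS,mmm, the timestamp style SRT files use."""
    total_ms = int(round(seconds * 1000))
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

with open("lecture.srt", "w", encoding="utf-8") as out:
    for i, (start, end, text) in enumerate(segments, start=1):
        out.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")
```

Because the time stamps survive the human editing pass, the corrected captions stay in sync with the original video.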
In the long run, Zirkle believes her school's IT-centric approach will prove more affordable than using an outside service or graduate students to handle the bulk of the work. "The one-time purchase fee for Docsoft is much more economical than spending a minimum of $150 for a single session of CART services in the classroom," she says.
The cost factor is certainly an argument that resonates with Oklahoma State University. A new state law requiring that all videos be captioned left the school with a difficult choice: Either dramatically cut down on the number of videos it produced, or find an automated solution. Like George Mason, it opted for the automation solution offered by Docsoft. As the school moves to implement classroom capture systems, it expects to expand the system's use even further.
Not everyone feels speech-to-text technology is ready for educational prime time, though. "Having incorrect words in the closed captioning can have a profound impact in the educational setting, where accuracy is crucial," says Robert Wyatt, director of distance learning at Western Kentucky University, which uses a combination of Tegrity’s lecture capture software and student transcriptionists to get the job done.
"We picked the software based on its lecture capture capabilities, and then we started adding closed captioning to all of our lectures," says Wyatt. Before captured lectures are broadcast, a team of 10 students (who each work 20 hours per week) listens to the materials and transcribes them.
Wyatt feels the system is a cost-effective way to convert the university's lectures into text that hearing-impaired students can use. "It’s not very expensive, and it takes about twice the time of the real lecture to get it done," he says. "Not only are we ensuring better accuracy than any speech-recognition program can provide, but we're also helping 10 students pay for their education."
A Hybrid Solution
The majority of schools appear to be taking approaches similar to Western Kentucky’s, utilizing systems that meld technology and good old human effort. At many of these institutions, though, the goal is to make the process so simple that it appears to be automated.
At Penn State University's College of Arts and Architecture, for example, faculty capture videos and lectures, and then simply upload them to 3Play Media's online transcription, captioning, and interactive-transcript service.
"We upload our videos and they come back transcribed within a day or two," says Keith Bailey, director of the college's e-Learning Institute, which has been using the system for about a year.
Once the transcribed files are returned, an instructional designer retrieves the embed code from 3Play Media's server and "drops it" into the course content. "When students view the video, the course, or other multimedia, they can just hit the 'closed captioned' button and download the transcript--all without much effort," explains Bailey.
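The article doesn't show 3Play Media's actual embed code, so the sketch below illustrates the generic route instead: convert the caption file to WebVTT and attach it to an HTML5 video with a track element, which is what gives students a captions toggle in the browser's player. The file names and markup are assumptions for illustration only.

```python
# Hypothetical sketch of the generic web-embed route (not 3Play Media's API):
# convert the SRT captions to WebVTT and attach them to an HTML5 video element.

def srt_to_vtt(srt_path: str, vtt_path: str) -> None:
    """Convert an SRT caption file to WebVTT, the format the <track> element expects."""
    with open(srt_path, encoding="utf-8") as src, \
         open(vtt_path, "w", encoding="utf-8") as dst:
        dst.write("WEBVTT\n\n")
        for line in src:
            if "-->" in line:
                # SRT writes 00:00:03,200; WebVTT wants 00:00:03.200
                line = line.replace(",", ".")
            dst.write(line)

srt_to_vtt("lecture.srt", "lecture.vtt")

# A minimal fragment a course page could use; the browser's own player controls
# expose the captions to the student.
embed = """
<video controls src="lecture.mp4">
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
</video>
"""
print(embed)
```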
Gallaudet University, a school for deaf and hearing-impaired students in Washington, DC, is pursuing a similar tack. Earl Parks Jr., executive director of Gallaudet Technology Services and himself deaf, worked with Echo360, the school's lecture capture provider, to add features that make the transcription process seamless.
Gallaudet currently has 11 classrooms equipped with Echo360, and two self-service lecture capture studios where faculty and students create learning objects, video-based projects, and online lectures for distance-learning courses. The captured content is made accessible through the school's Blackboard LMS (via a URL that is automatically assigned). If a teacher wants a transcript and closed captioning for any content, he can request it at the push of a button.
That's when the setup's manual component kicks in. "The file is sent out to transcription services, which do the work and put the content back where it belongs without us having to get involved," explains Parks.
Ironically, Gallaudet may have less need for closed captioning than most other schools. A high percentage of its lectures are held in American Sign Language, with no audio at all. "We’re kind of in a catch-22 situation here," says Parks. "In many cases, for us to caption our content, we first have to interpret it and then send it off for transcription. Because of that, most of our lectures are not captioned."
Nevertheless, Parks sees closed captioning as an important process for all colleges. The benefits, he says, extend to students for whom English is a second language, and even to courses that rely on difficult terminology.
"Closed captioning is really about universal access, and universities need to keep this in mind as they set up their systems," notes Parks. "Making the process seamless is also critical, so that deaf and hard-of-hearing students don't stand out and feel as if they're being focused on. They want to feel that they're part of the learning environment."
The Search Side Effect
The benefits of closed captioning for hearing-impaired students are obvious. But it's also proving to have real value for the entire campus community, by enabling better search capabilities. Using products such as Echo360, Sonic Foundry's Mediasite, and McGraw-Hill's Tegrity, for example, students can skip to any part of a professor's oral presentation via keyword search--they no longer need to wade through the entire thing or rely on a search of accompanying lecture notes or a PowerPoint presentation.
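Echo360, Mediasite, and Tegrity build this search into their own players, and the article doesn't describe their internals. As a rough standalone illustration, the hypothetical snippet below scans the time-stamped caption cues from the earlier sketch for a keyword and reports where each hit starts, which is all a player needs in order to seek to that moment.

```python
# Rough, standalone illustration of keyword search over a time-stamped transcript
# (not the internals of Echo360, Mediasite, or Tegrity). It scans SRT cues and
# reports the start time of every cue whose text contains the keyword.

def find_keyword(srt_path: str, keyword: str) -> list[tuple[str, str]]:
    """Return (start timestamp, cue text) pairs whose text mentions the keyword."""
    with open(srt_path, encoding="utf-8") as f:
        cues = f.read().strip().split("\n\n")
    hits = []
    for cue in cues:
        lines = cue.splitlines()
        if len(lines) < 3:          # expect: index, timing line, one or more text lines
            continue
        start = lines[1].split(" --> ")[0]
        text = " ".join(lines[2:])
        if keyword.lower() in text.lower():
            hits.append((start, text))
    return hits

# e.g. jump straight to the point where the professor picks up "supply and demand"
for start, text in find_keyword("lecture.srt", "supply and demand"):
    print(f"{start}  {text}")
```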
"The content's value is definitely limited unless it can be both transcribed and searchable," says Tole Khesin of 3Play Media in Cambridge, MA. "Using closed captioning, students can jump to any specific part of a video to find what they need." |